Brain tumor detector¶
Nutshell¶
In this project I build a program that detects and localizes cancer in images of human brains, following the course Modern Artificial Intelligence, lectured by Dr. Ryan Ahmed, Ph.D., MBA.
I will train two models:
- a classifier that labels each image as containing a tumor or not
- a segmentation model that localizes the tumor within the brain
Introduction to the Brain Tumor Detection¶
Deep learning has proven to be as good as, and sometimes better than, humans at detecting diseases from X-rays, MRI scans and CT scans. There is huge potential in using AI to speed up and improve the accuracy of diagnosis. This project uses the labeled dataset from https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation, which consists of 3929 brain MRI scans together with the corresponding tumor masks. The final pipeline is a two-step process:
- A ResNet deep learning classifier sorts the input images into two groups: tumor detected and no tumor detected.
- For the images where a tumor was detected, a second step is performed: a ResUNet segmentation model localizes the tumor at the pixel level.
Image segmentation¶
Image segmentation extracts information from images at the pixel level. It is used for object recognition and localization in applications like medical imaging and self-driving cars. Deep learning approaches produce a pixel-wise mask of the image using architectures such as CNNs, fully convolutional networks (FCNs) and deep encoder-decoders.
With UNet, the input and the output have the same size, so the resolution of the image is preserved. In contrast to CNN image classification, where the image is reduced to a vector and the entire image is assigned a single class label, UNet performs classification at the pixel level: a softmax is applied to every pixel, and a loss is computed for every pixel. In other words, the segmentation problem is solved as a per-pixel classification problem.
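To make the per-pixel formulation concrete, here is a minimal numpy sketch (not the actual UNet; shapes and values are made up). The segmentation head built later in this project uses a sigmoid per pixel, which for two classes is equivalent to a softmax:

```python
import numpy as np

# Toy per-pixel classification: every pixel gets its own probability
# and its own loss term, instead of one label for the whole image.
rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=(4, 4)).astype(float)   # ground-truth mask
logits = rng.normal(size=(4, 4))                       # raw per-pixel scores
probs = 1.0 / (1.0 + np.exp(-logits))                  # per-pixel sigmoid

# Binary cross-entropy evaluated independently at every pixel,
# then averaged over the image.
eps = 1e-7
pixel_loss = -(mask * np.log(probs + eps) + (1 - mask) * np.log(1 - probs + eps))
print(pixel_loss.shape)   # (4, 4): one loss value per pixel
print(pixel_loss.mean())  # scalar loss for the whole image
```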
Looking into the data¶
We have a CSV file that contains the patient IDs, the locations of the images and their masks, and an indicator of whether there is a tumor in the image (1 - tumor, 0 - healthy). There are 1373 images with tumors and 2556 healthy brain images, so the dataset is imbalanced.
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3929 entries, 0 to 3928
Data columns (total 4 columns):
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   patient_id  3929 non-null   object
 1   image_path  3929 non-null   object
 2   mask_path   3929 non-null   object
 3   mask        3929 non-null   int64
dtypes: int64(1), object(3)
memory usage: 122.9+ KB
| | patient_id | image_path | mask_path | mask |
|---|---|---|---|---|
| 0 | TCGA_CS_5395_19981004 | TCGA_CS_5395_19981004/TCGA_CS_5395_19981004_1.tif | TCGA_CS_5395_19981004/TCGA_CS_5395_19981004_1_... | 0 |
| 1 | TCGA_CS_4944_20010208 | TCGA_CS_4944_20010208/TCGA_CS_4944_20010208_1.tif | TCGA_CS_4944_20010208/TCGA_CS_4944_20010208_1_... | 0 |
| 2 | TCGA_CS_4941_19960909 | TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_1.tif | TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_1_... | 0 |
| 3 | TCGA_CS_4943_20000902 | TCGA_CS_4943_20000902/TCGA_CS_4943_20000902_1.tif | TCGA_CS_4943_20000902/TCGA_CS_4943_20000902_1_... | 0 |
| 4 | TCGA_CS_5396_20010302 | TCGA_CS_5396_20010302/TCGA_CS_5396_20010302_1.tif | TCGA_CS_5396_20010302/TCGA_CS_5396_20010302_1_... | 0 |
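The class counts above come from a simple value_counts on the mask column. A toy sketch with a hypothetical miniature dataframe (the real file has 3929 rows):

```python
import pandas as pd

# Hypothetical miniature of the project's dataframe, for illustration only.
df = pd.DataFrame({
    "patient_id": ["p1", "p1", "p2", "p2", "p3"],
    "mask": [0, 1, 0, 0, 1],   # 1 = tumor, 0 = healthy
})

counts = df["mask"].value_counts()
print(counts[0], counts[1])    # 3 healthy, 2 tumor
print(counts[1] / len(df))     # tumor fraction -> reveals class imbalance
```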
Visualisation of the datasets¶
Below is an example of an MRI image and its matching mask; this example has a small tumor. In images where no tumor is present, the mask is completely black.
Below are visualisations of 6 MRIs with their masks overlaid in rose color, to get a sense of the data that I will be using in this project.
Convolutional neural networks (CNNs)¶
- The first CNN layers extract general, low-level features
- The last couple of layers perform the classification
- Local receptive fields scan the image first, searching for simple shapes such as edges and lines
- These edges are picked up by subsequent layers and combined into more complex features
A good visualisation of the feature extraction with convolutions can be found at https://setosa.io/ev/image-kernels/
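As a concrete sketch of what such a kernel does, the snippet below applies a Sobel-style vertical-edge kernel (like the ones on the setosa.io page) to a toy image using a plain numpy cross-correlation. The kernel values are the standard Sobel weights; the image and helper are made up for illustration:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 2D cross-correlation ('valid' padding), as a CNN layer computes it."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image with a vertical step edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

# Sobel-style vertical-edge kernel.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

response = conv2d_valid(img, kernel)
# Columns straddling the edge respond strongly; flat regions give zero.
print(response)
```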
ResNet (Residual Network)¶
- As CNNs grow deeper, vanishing gradients negatively impact network performance. The vanishing gradient problem occurs when the gradient becomes very small as it is backpropagated to the earlier layers.
- ResNet's "skip connection" feature allows training networks of 152 layers without vanishing gradient problems
- ResNet adds an "identity mapping" on top of the CNN
- The ResNet deep network is pretrained on ImageNet, which contains over 14 million images in more than 20 000 categories
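The effect of a skip connection can be shown with a toy numeric example: a residual block computes F(x) + x, so even when the learned branch F outputs (near) zero, as at initialization, the block behaves as the identity and the input and its gradient pass through unchanged. This is a sketch of the idea, not the actual ResNet block:

```python
import numpy as np

def residual_block(x, w):
    """Toy residual block: output = F(x) + x, with F a trivial ReLU(w * x)."""
    fx = np.maximum(0.0, w * x)   # the "learned" branch
    return fx + x                 # skip connection adds the input back

x = 2.0
print(residual_block(x, w=0.0))   # 2.0: with F == 0, the block is the identity
print(residual_block(x, w=0.5))   # 3.0: the residual branch adds a correction
```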
ResNet paper (He et al., 2015): https://arxiv.org/pdf/1512.03385
As seen in Figure 6 of the ResNet paper, the ResNet architectures overcome the training challenges of deep networks compared to plain networks. An ensemble of ResNets achieved a 3.57% top-5 error rate on the ImageNet dataset, which is better than human-level performance.
Siddarth Das has made a great comparison of CNN architecture performances, you can check it out here: https://medium.com/analytics-vidhya/cnns-architectures-lenet-alexnet-vgg-googlenet-resnet-and-more-666091488df5
Transfer learning¶
Transfer learning takes a network that has been trained on one task and retrains it for a similar task. Using a pretrained model can drastically reduce the computational time and the amount of training data required, compared to starting from scratch. It can be compared to a salsa dancer starting to learn bachata: he or she will probably do a lot better than a person who has never danced before.
There are two main strategies in transfer learning:
- Freeze the trained CNN weights of the early layers and train only the newly added dense layers. The new layers are initialized with random weights.
- Retrain the entire CNN while keeping the learning rate very small. With too large a learning rate, the already-trained weights might change too dramatically.
In this project I will use approach 1.
Transfer learning has its own challenges:
- Negative Transfer: the source task/domain is “close enough to look useful” but actually pushes the model in the wrong direction, hurting performance compared to training from scratch. This occurs when the features of old and new tasks are not related.
- Which layers to transfer / freeze: deciding what to reuse vs retrain is nontrivial; freezing too much can underfit, unfreezing too much can overfit or destabilize training.
- Representation misalignment: even if tasks are related, the internal features might not separate target classes well, especially when target cues differ (e.g., medical imaging vs natural images).
- Transfer bounds: quantifying how much knowledge is actually transferred is crucial for ensuring model quality and robustness, and how to measure it is a subject of ongoing research.
This is a great resource for transfer learning from Dipanjan Sarkar: https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a/
ResUNet¶
I will use ResUNet in the second part for the segmentation of the tumors.
- The ResUNet architecture combines the UNet backbone with residual blocks
- UNet is built on Fully Convolutional Networks (FCNs) and is adapted to perform well on segmentation tasks
- ResUNet has three parts:
- Encoder or contracting path
- Bottleneck
- Decoder or expansive path
The contracting path consists of several contraction blocks, each of which passes its input through a res-block followed by 2x2 max-pooling. The number of feature maps doubles after each block, which helps the model learn complex features effectively.
The bottleneck passes its input through a res-block and is followed by 2x2 up-sampling.
Each decoder block takes the up-sampled input from the previous layer and concatenates it with the corresponding output features from the res-blocks in the contracting path, then passes the result through a res-block. This ensures that the features learned while contracting are used when reconstructing the image.
The output of the final expansion stage is passed through a 1x1 convolution layer to produce an output of the same size as the input.
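The shape bookkeeping of one decoder step can be sketched in numpy. The numbers below match the deepest stage of this network for a 256x256 input; a nearest-neighbour repeat stands in for the 2x2 up-sampling layer:

```python
import numpy as np

# Toy decoder step (channels-last): the bottleneck output is 16x16x256,
# the matching encoder skip features are 32x32x128.
bottleneck = np.zeros((1, 16, 16, 256))
skip = np.zeros((1, 32, 32, 128))

# Nearest-neighbour 2x2 upsampling doubles the spatial dimensions.
up = bottleneck.repeat(2, axis=1).repeat(2, axis=2)

# Concatenate with the encoder features along the channel axis.
merged = np.concatenate([up, skip], axis=-1)
print(up.shape, merged.shape)   # (1, 32, 32, 256) (1, 32, 32, 384)
```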
The original paper that introduced ResUNet: https://arxiv.org/pdf/1904.00592
Part 1: Training a classifier model to detect if tumor exists or not¶
I use flow_from_dataframe for training, with batch size 16 and class mode 'categorical'.
# @title Preparing image generators
train_generator = datagen.flow_from_dataframe(
dataframe = train,
directory = './',
x_col = 'image_path',
y_col = 'mask',
subset = 'training',
batch_size =16,
shuffle = True,
class_mode = 'categorical',
target_size = (256, 256)
)
valid_generator = datagen.flow_from_dataframe(
dataframe = train,
directory = './',
x_col = 'image_path',
y_col = 'mask',
subset = 'validation',
batch_size = 16,
shuffle = True,
class_mode = 'categorical',
target_size = (256, 256)
)
#create a data generator for test images
#no need for splitting again because here we use the "test" data set
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_dataframe(
dataframe = test,
directory = './',
x_col = 'image_path',
y_col = 'mask',
batch_size = 16,
shuffle = False,
class_mode = 'categorical',
target_size = (256, 256)
)
Found 2839 validated image filenames belonging to 2 classes.
Found 500 validated image filenames belonging to 2 classes.
Found 590 validated image filenames belonging to 2 classes.
Below is the architecture of the ResNet50 model. For the transfer learning, all of these layers will be set to trainable = False to stop the weights from changing. The last layers in purple are the added layers which will be trained.
# @title Retrieve ResNet50 base model
# Input tensor 256 x 256 x 3
basemodel = ResNet50(weights = 'imagenet', include_top = False,
input_tensor = Input(shape = (256, 256, 3)))
# Freeze the base model layers so the pretrained weights stay fixed
for layer in basemodel.layers:
    layer.trainable = False
# Add classification head to the base model
headmodel = basemodel.output
headmodel = AveragePooling2D(pool_size = (4, 4))(headmodel)
headmodel = Flatten(name = 'flatten')(headmodel)
headmodel = Dense(256, activation = 'relu')(headmodel)
headmodel = Dropout(0.3)(headmodel)
headmodel = Dense(2, activation = 'softmax')(headmodel)
fullmodel = Model(inputs = basemodel.input, outputs = headmodel)
# compile the model
fullmodel.compile(loss = 'categorical_crossentropy', optimizer='adam',
metrics=["accuracy"])
# use the early stopping to exit training
earlystopping = EarlyStopping(monitor='val_loss', mode='min', verbose = 1,
patience = 20)
# save the best model with least validation loss
checkpointer = ModelCheckpoint(filepath='classifier-resnet-weights.keras',
verbose=1, save_best_only=True)
# Callbacks: logs epoch results to CSV
csv_logger = CSVLogger(
model_base/"training_history_classifier_model_22-12-2025.csv",
append=True, # keep adding if file exists
separator=',' # comma-separated
)
if train_model:
history = fullmodel.fit(train_generator,
steps_per_epoch = train_generator.n // train_generator.batch_size,
epochs=25,
validation_data=valid_generator,
validation_steps= valid_generator.n // valid_generator.batch_size,
callbacks=[checkpointer, earlystopping, csv_logger])
Epoch 1/25: accuracy: 0.7409 - loss: 1.0084 - val_accuracy: 0.6492 - val_loss: 5.1131 (val_loss improved from inf, saving model to classifier-resnet-weights.keras)
/usr/local/lib/python3.12/dist-packages/keras/src/trainers/epoch_iterator.py:116: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
Epoch 2/25: accuracy: 1.0000 - loss: 0.1226 - val_accuracy: 0.6492 - val_loss: 5.1552
Epoch 3/25: accuracy: 0.8663 - loss: 0.3271 - val_accuracy: 0.6472 - val_loss: 0.6522 (saved)
Epoch 4/25: accuracy: 0.9375 - loss: 0.2529 - val_accuracy: 0.6472 - val_loss: 0.6512 (saved)
Epoch 5/25: accuracy: 0.8795 - loss: 0.3137 - val_accuracy: 0.6512 - val_loss: 2.4637
Epoch 6/25: accuracy: 0.8750 - loss: 0.5172 - val_accuracy: 0.6492 - val_loss: 2.8635
Epoch 7/25: accuracy: 0.8860 - loss: 0.2788 - val_accuracy: 0.6532 - val_loss: 0.6581
Epoch 8/25: accuracy: 0.9375 - loss: 0.2686 - val_accuracy: 0.6552 - val_loss: 0.6583
Epoch 9/25: accuracy: 0.9208 - loss: 0.1986 - val_accuracy: 0.7097 - val_loss: 0.5862 (saved)
Epoch 10/25: accuracy: 0.8750 - loss: 0.6005 - val_accuracy: 0.7016 - val_loss: 0.5849 (saved)
Epoch 11/25: accuracy: 0.9280 - loss: 0.1871 - val_accuracy: 0.8367 - val_loss: 0.3733 (saved)
Epoch 12/25: accuracy: 0.9375 - loss: 0.1971 - val_accuracy: 0.8448 - val_loss: 0.3847
Epoch 13/25: accuracy: 0.9334 - loss: 0.1942 - val_accuracy: 0.9415 - val_loss: 0.1768 (saved)
Epoch 14/25: accuracy: 1.0000 - loss: 0.0706 - val_accuracy: 0.9415 - val_loss: 0.1753 (saved)
Epoch 15/25: accuracy: 0.9481 - loss: 0.1595 - val_accuracy: 0.8690 - val_loss: 0.4619
Epoch 16/25: accuracy: 0.9375 - loss: 0.1972 - val_accuracy: 0.8488 - val_loss: 0.5329
Epoch 17/25: accuracy: 0.9404 - loss: 0.1725 - val_accuracy: 0.8629 - val_loss: 0.4133
Epoch 18/25: accuracy: 0.8750 - loss: 0.3450 - val_accuracy: 0.8770 - val_loss: 0.3382
Epoch 19/25: accuracy: 0.9301 - loss: 0.1878 - val_accuracy: 0.9234 - val_loss: 0.2028
Epoch 20/25: accuracy: 0.9375 - loss: 0.1574 - val_accuracy: 0.9254 - val_loss: 0.1922
Epoch 21/25: accuracy: 0.9541 - loss: 0.1313 - val_accuracy: 0.9516 - val_loss: 0.1625 (saved)
Epoch 22/25: accuracy: 1.0000 - loss: 0.0322 - val_accuracy: 0.9496 - val_loss: 0.1651
Epoch 23/25: accuracy: 0.9626 - loss: 0.0961 - val_accuracy: 0.9415 - val_loss: 0.1676
Epoch 24/25: accuracy: 0.9375 - loss: 0.1396 - val_accuracy: 0.9456 - val_loss: 0.1633
Epoch 25/25: accuracy: 0.9667 - loss: 0.0949 - val_accuracy: 0.8427 - val_loss: 0.8755
Best val_loss: 0.16246 (epoch 21).
Assess classifier model performance¶
The model accuracy on the test set is 0.97.
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred, labels = [0,1]))
              precision    recall  f1-score   support

           0       0.98      0.98      0.98       383
           1       0.97      0.96      0.96       207

   micro avg       0.97      0.97      0.97       590
   macro avg       0.97      0.97      0.97       590
weighted avg       0.97      0.97      0.97       590
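For reference, this is how the report's precision, recall and F1 relate to raw counts. The counts below are made up for illustration, not the actual confusion matrix of this model:

```python
# Hypothetical counts for one class: true positives, false positives,
# false negatives (not the real confusion matrix).
tp, fp, fn = 96, 3, 4

precision = tp / (tp + fp)                           # of predicted positives, how many are right
recall = tp / (tp + fn)                              # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two

print(round(precision, 2), round(recall, 2), round(f1, 2))   # 0.97 0.96 0.96
```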
Part 2: Building a segmentation model to localize tumors¶
(1373, 4)
Saved to: /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/segmentation_splits.json
def resblock(X, f):
    # Make a copy of the input for the shortcut path
    X_copy = X
    # Main path
    # Read more about he_normal: https://medium.com/@prateekvishnu/xavier-and-he-normal-he-et-al-initialization-8e3d7a087528
    X = Conv2D(f, kernel_size = (1,1), strides = (1,1), kernel_initializer = 'he_normal')(X)
    X = BatchNormalization()(X)
    X = Activation('relu')(X)
    X = Conv2D(f, kernel_size = (3,3), strides = (1,1), padding = 'same', kernel_initializer = 'he_normal')(X)
    X = BatchNormalization()(X)
    # Short path
    # Read more here: https://towardsdatascience.com/understanding-and-coding-a-resnet-in-keras-446d7ff84d33
    X_copy = Conv2D(f, kernel_size = (1,1), strides = (1,1), kernel_initializer = 'he_normal')(X_copy)
    X_copy = BatchNormalization()(X_copy)
    # Add the outputs of the main path and the short path
    X = Add()([X, X_copy])
    X = Activation('relu')(X)
    return X

# Function to upsample the input and concatenate it with the skip features
def upsample_concat(x, skip):
    x = UpSampling2D((2,2))(x)
    merge = Concatenate()([x, skip])
    return merge
input_shape = (256,256,3)
# Input tensor shape
X_input = Input(input_shape)
# Stage 1
conv1_in = Conv2D(16,3,activation= 'relu', padding = 'same', kernel_initializer ='he_normal')(X_input)
conv1_in = BatchNormalization()(conv1_in)
conv1_in = Conv2D(16,3,activation= 'relu', padding = 'same', kernel_initializer ='he_normal')(conv1_in)
conv1_in = BatchNormalization()(conv1_in)
pool_1 = MaxPool2D(pool_size = (2,2))(conv1_in)
# Stage 2
conv2_in = resblock(pool_1, 32)
pool_2 = MaxPool2D(pool_size = (2,2))(conv2_in)
# Stage 3
conv3_in = resblock(pool_2, 64)
pool_3 = MaxPool2D(pool_size = (2,2))(conv3_in)
# Stage 4
conv4_in = resblock(pool_3, 128)
pool_4 = MaxPool2D(pool_size = (2,2))(conv4_in)
# Stage 5 (Bottle Neck)
conv5_in = resblock(pool_4, 256)
# Upscale stage 1
up_1 = upsample_concat(conv5_in, conv4_in)
up_1 = resblock(up_1, 128)
# Upscale stage 2
up_2 = upsample_concat(up_1, conv3_in)
up_2 = resblock(up_2, 64)
# Upscale stage 3
up_3 = upsample_concat(up_2, conv2_in)
up_3 = resblock(up_3, 32)
# Upscale stage 4
up_4 = upsample_concat(up_3, conv1_in)
up_4 = resblock(up_4, 16)
# Final Output
output = Conv2D(1, (1,1), padding = "same", activation = "sigmoid")(up_4)
model_seg = Model(inputs = X_input, outputs = output )
# @title Compiling the segmentation model
from utilities import tversky, tversky_loss, focal_tversky
adam = keras.optimizers.Adam(learning_rate = 0.05, epsilon = 0.1)
model_seg.compile(optimizer = adam, loss = focal_tversky, metrics = [tversky])
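The tversky and focal_tversky functions are imported from the project's utilities module. Below is a minimal numpy sketch of the idea: the Tversky index generalizes Dice by weighting false negatives and false positives separately, and the focal variant raises the loss to a power to focus training on hard examples. The alpha, beta and gamma values here are commonly used defaults, not necessarily the ones in utilities.py:

```python
import numpy as np

def tversky_index(y_true, y_pred, alpha=0.7, beta=0.3, smooth=1.0):
    """Tversky index: alpha weights false negatives, beta false positives."""
    tp = np.sum(y_true * y_pred)
    fn = np.sum(y_true * (1 - y_pred))
    fp = np.sum((1 - y_true) * y_pred)
    return (tp + smooth) / (tp + alpha * fn + beta * fp + smooth)

def focal_tversky_loss(y_true, y_pred, gamma=0.75):
    """Focal Tversky loss: emphasizes examples with poor overlap."""
    return (1 - tversky_index(y_true, y_pred)) ** gamma

y_true = np.array([1.0, 1.0, 0.0, 0.0])
perfect = focal_tversky_loss(y_true, y_true)                      # perfect overlap
poor = focal_tversky_loss(y_true, np.array([0.0, 0.0, 1.0, 1.0]))  # no overlap
print(perfect < poor)   # True: the loss grows as the overlap worsens
```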
Epoch 1/60: loss: 0.8249 - tversky: 0.2224 - val_loss: 0.5969 - val_tversky: 0.4970 (val_loss improved from inf, saving model to /content/drive/MyDrive/Colab Notebooks/brain-tumor-detector/Models/ResUNet-weights_22-12-2025.keras)
Epoch 2/60: loss: 0.4330 - tversky: 0.6691 - val_loss: 0.6728 - val_tversky: 0.4079
Epoch 3/60: loss: 0.3879 - tversky: 0.7147 - val_loss: 0.3621 - val_tversky: 0.7410 (saved)
Epoch 4/60: loss: 0.3287 - tversky: 0.7716 - val_loss: 0.3912 - val_tversky: 0.7129
Epoch 5/60: loss: 0.2861 - tversky: 0.8099 - val_loss: 0.3432 - val_tversky: 0.7583 (saved)
Epoch 6/60: loss: 0.2627 - tversky: 0.8299 - val_loss: 0.3290 - val_tversky: 0.7722 (saved)
Epoch 7/60: loss: 0.2603 - tversky: 0.8322 - val_loss: 0.2612 - val_tversky: 0.8314 (saved)
Epoch 8/60: loss: 0.2593 - tversky: 0.8331 - val_loss: 0.3317 - val_tversky: 0.7696
Epoch 9/60: loss: 0.2281 - tversky: 0.8595 - val_loss: 0.2777 - val_tversky: 0.8169
Epoch 10/60: loss: 0.2070 - tversky: 0.8770 - val_loss: 0.3404 - val_tversky: 0.7616
Epoch 11/60: loss: 0.2092 - tversky: 0.8744 - val_loss: 0.2414 - val_tversky: 0.8488 (saved)
Epoch 12/60: loss: 0.1752 - tversky: 0.9015 - val_loss: 0.2323 - val_tversky: 0.8565 (saved)
Epoch 13/60: loss: 0.1897 - tversky: 0.8903 - val_loss: 0.2023 - val_tversky: 0.8811 (saved)
Epoch 14/60: loss: 0.1750 - tversky: 0.9015 - val_loss: 0.2020 - val_tversky: 0.8810 (saved)
Epoch 15/60: loss: 0.1699 - tversky: 0.9054 - val_loss: 0.2010 - val_tversky: 0.8819 (saved)
Epoch 16/60: loss: 0.1603 - tversky: 0.9125 - val_loss: 0.1949 - val_tversky: 0.8864 (saved)
Epoch 17/60: loss: 0.1570 - tversky: 0.9150 - val_loss: 0.2004 - val_tversky: 0.8823
Epoch 18/60: loss: 0.1478 - tversky: 0.9212 - val_loss: 0.1931 - val_tversky: 0.8875 (saved)
Epoch 19/60: loss: 0.1435 - tversky: 0.9245 - val_loss: 0.1871 - val_tversky: 0.8928 (saved)
Epoch 20/60: loss: 0.1428 - tversky: 0.9250 - val_loss: 0.1859 - val_tversky: 0.8938 (saved)
Epoch 21/60: loss: 0.1363 - tversky: 0.9296 - val_loss: 0.1887 - val_tversky: 0.8909
Epoch 22/60: loss: 0.1325 - tversky: 0.9321 - val_loss: 0.2548 - val_tversky: 0.8371
Epoch 23/60: loss: 0.1386 - tversky: 0.9278 - val_loss: 0.1829 - val_tversky: 0.8961 (saved)
Epoch 24/60: loss: 0.1323 - tversky: 0.9322 - val_loss: 0.2016 - val_tversky: 0.8809
Epoch 25/60: loss: 0.1302 - tversky: 0.9337 - val_loss: 0.1765 - val_tversky: 0.9006 (saved)
Epoch 26/60: loss: 0.1222 - tversky: 0.9391 - val_loss: 0.1694 - val_tversky: 0.9062 (saved)
Epoch 27/60: loss: 0.1145 - tversky: 0.9442 - val_loss: 0.1742 - val_tversky: 0.9026
Epoch 28/60: loss: 0.1112 - tversky: 0.9464 - val_loss: 0.1856 - val_tversky: 0.8932
Epoch 29/60: loss: 0.1146 - tversky: 0.9442 - val_loss: 0.1869 - val_tversky: 0.8929
Epoch 30/60: loss: 0.1109 - tversky: 0.9464 - val_loss: 0.1718 - val_tversky: 0.9039
Epoch 31/60: loss: 0.1127 - tversky: 0.9453 - val_loss: 0.1688 - val_tversky: 0.9060 (saved)
Epoch 32/60: loss: 0.1066 - tversky: 0.9492 - val_loss: 0.1761 - val_tversky: 0.9008
Epoch 33/60: loss: 0.1027 - tversky: 0.9518 - val_loss: 0.1950 - val_tversky: 0.8855
Epoch 34/60: loss: 0.1072 - tversky: 0.9488 - val_loss: 0.1735 - val_tversky: 0.9030
Epoch 35/60: loss: 0.1063 - tversky: 0.9494 - val_loss: 0.1840 - val_tversky: 0.8947
Epoch 36/60: loss: 0.1029 - tversky: 0.9516 - val_loss: 0.1673 - val_tversky: 0.9076 (saved)
Epoch 37/60: loss: 0.1047 - tversky: 0.9505 - val_loss: 0.1726 - val_tversky: 0.9034
Epoch 38/60: loss: 0.0948 - tversky: 0.9566 - val_loss: 0.1846 - val_tversky: 0.8945
Epoch 39/60: loss: 0.0965 - tversky: 0.9556 - val_loss: 0.1735 - val_tversky: 0.9030
Epoch 40/60: loss: 0.1000 - tversky: 0.9534 - val_loss: 0.1733 - val_tversky: 0.9028
Epoch 41/60: loss: 0.0920 - tversky: 0.9584 - val_loss: 0.1705 - val_tversky: 0.9054
Epoch 42/60: loss: 0.0925 - tversky: 0.9580 - val_loss: 0.1703 - val_tversky: 0.9047
Epoch 43/60: loss: 0.1005 - tversky: 0.9529 - val_loss: 0.1851 - val_tversky: 0.8939
Epoch 44/60: loss: 0.0894 - tversky: 0.9598 - val_loss: 0.1742 - val_tversky: 0.9024
Epoch 45/60: loss: 0.0879 - tversky: 0.9607 - val_loss: 0.1711 - val_tversky: 0.9047
Epoch 46/60: loss: 0.0885 - tversky: 0.9604 - val_loss: 0.1731 - val_tversky: 0.9030
Epoch 47/60: loss: 0.0867 - tversky: 0.9615 - val_loss: 0.1741 - val_tversky: 0.9024
(output truncated at epoch 48; best val_loss so far: 0.16731, epoch 36)
144ms/step - loss: 0.0846 - tversky: 0.9628 Epoch 48: val_loss did not improve from 0.16731 72/72 ━━━━━━━━━━━━━━━━━━━━ 11s 156ms/step - loss: 0.0846 - tversky: 0.9627 - val_loss: 0.1884 - val_tversky: 0.8911 Epoch 49/60 72/72 ━━━━━━━━━━━━━━━━━━━━ 0s 146ms/step - loss: 0.0890 - tversky: 0.9600 Epoch 49: val_loss did not improve from 0.16731 72/72 ━━━━━━━━━━━━━━━━━━━━ 12s 159ms/step - loss: 0.0890 - tversky: 0.9600 - val_loss: 0.1750 - val_tversky: 0.9016 Epoch 50/60 72/72 ━━━━━━━━━━━━━━━━━━━━ 0s 145ms/step - loss: 0.0835 - tversky: 0.9634 Epoch 50: val_loss did not improve from 0.16731 72/72 ━━━━━━━━━━━━━━━━━━━━ 11s 157ms/step - loss: 0.0835 - tversky: 0.9634 - val_loss: 0.1840 - val_tversky: 0.8951 Epoch 51/60 72/72 ━━━━━━━━━━━━━━━━━━━━ 0s 143ms/step - loss: 0.0813 - tversky: 0.9646 Epoch 51: val_loss did not improve from 0.16731 72/72 ━━━━━━━━━━━━━━━━━━━━ 11s 156ms/step - loss: 0.0814 - tversky: 0.9646 - val_loss: 0.1725 - val_tversky: 0.9029 Epoch 52/60 72/72 ━━━━━━━━━━━━━━━━━━━━ 0s 145ms/step - loss: 0.0816 - tversky: 0.9645 Epoch 52: val_loss did not improve from 0.16731 72/72 ━━━━━━━━━━━━━━━━━━━━ 11s 158ms/step - loss: 0.0816 - tversky: 0.9645 - val_loss: 0.1810 - val_tversky: 0.8973 Epoch 53/60 72/72 ━━━━━━━━━━━━━━━━━━━━ 0s 145ms/step - loss: 0.0837 - tversky: 0.9633 Epoch 53: val_loss did not improve from 0.16731 72/72 ━━━━━━━━━━━━━━━━━━━━ 11s 158ms/step - loss: 0.0837 - tversky: 0.9633 - val_loss: 0.1816 - val_tversky: 0.8970 Epoch 54/60 72/72 ━━━━━━━━━━━━━━━━━━━━ 0s 144ms/step - loss: 0.0798 - tversky: 0.9655 Epoch 54: val_loss did not improve from 0.16731 72/72 ━━━━━━━━━━━━━━━━━━━━ 11s 156ms/step - loss: 0.0798 - tversky: 0.9655 - val_loss: 0.1885 - val_tversky: 0.8917 Epoch 55/60 72/72 ━━━━━━━━━━━━━━━━━━━━ 0s 144ms/step - loss: 0.0798 - tversky: 0.9656 Epoch 55: val_loss did not improve from 0.16731 72/72 ━━━━━━━━━━━━━━━━━━━━ 11s 157ms/step - loss: 0.0798 - tversky: 0.9655 - val_loss: 0.1827 - val_tversky: 0.8961 Epoch 56/60 72/72 ━━━━━━━━━━━━━━━━━━━━ 0s 
144ms/step - loss: 0.0826 - tversky: 0.9639 Epoch 56: val_loss did not improve from 0.16731 72/72 ━━━━━━━━━━━━━━━━━━━━ 11s 157ms/step - loss: 0.0826 - tversky: 0.9639 - val_loss: 0.1789 - val_tversky: 0.8989 Epoch 56: early stopping
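The `tversky` metric reported in the training log is the Tversky similarity index, a generalization of the Dice coefficient that weights false negatives and false positives differently; it is a common choice for imbalanced segmentation masks. A minimal NumPy sketch of the index on binary masks (the weight `alpha=0.7` and the smoothing constant are illustrative assumptions, not necessarily the values used in this notebook):

```python
import numpy as np

def tversky_index(y_true, y_pred, alpha=0.7, smooth=1e-6):
    """Tversky index between two binary masks.

    alpha weights false negatives, (1 - alpha) weights false positives;
    alpha = 0.5 reduces to the Dice coefficient. The smoothing term keeps
    the ratio defined for empty masks.
    """
    y_true = y_true.astype(float).ravel()
    y_pred = y_pred.astype(float).ravel()
    tp = (y_true * y_pred).sum()          # true positive pixels
    fn = (y_true * (1 - y_pred)).sum()    # missed tumor pixels
    fp = ((1 - y_true) * y_pred).sum()    # false alarm pixels
    return (tp + smooth) / (tp + alpha * fn + (1 - alpha) * fp + smooth)

# Identical masks score ~1; disjoint masks score ~0
a = np.array([[1, 1], [0, 0]])
print(tversky_index(a, a))  # ~1.0
```

The training objective is then typically `1 - tversky_index` (or a focal variant of it), so the loss falling toward 0 corresponds to the index rising toward 1, as seen in the log.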
The best model was reached at epoch 36. After that, the validation loss did not improve for 20 consecutive epochs, so the early-stopping rule (patience=20) halted training at epoch 56.
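The bookkeeping behind this stopping rule is simple: track the best validation loss seen so far and stop once `patience` epochs pass without improving on it. A small self-contained sketch (the helper function is illustrative, not from the notebook, which uses Keras callbacks for this):

```python
def early_stopping_epoch(val_losses, patience=20):
    """Return (stopping epoch, best epoch), both 1-based.

    Training stops once `patience` consecutive epochs pass without the
    validation loss improving on the best value seen so far.
    """
    best_loss = float("inf")
    best_epoch = 0
    wait = 0  # epochs since last improvement
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch, best_epoch
    return len(val_losses), best_epoch

# Toy curve: best value at epoch 2, then 3 epochs without improvement
print(early_stopping_epoch([0.5, 0.3, 0.4, 0.35, 0.31], patience=3))  # (5, 2)
```

With patience=20 and the best value at epoch 36, stopping at epoch 56 is exactly what this rule predicts.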
if not retrain_model:
    # Load the saved training history from disk instead of retraining
    df_his = pd.read_csv(model_base/"training_history_segmodel_22-12-2025.csv", sep=None, engine="python")
    df_his = df_his.apply(pd.to_numeric, errors="coerce")
    history_data = df_his.to_dict(orient="list")
else:
    history_data = history.history  # metrics dict from the Keras History object

# ---- plot ----
out_path = image_base/"seg_model_train_history.png"
fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(history_data["loss"], label="train_loss")
if "val_loss" in history_data and history_data["val_loss"] is not None:
    ax.plot(history_data["val_loss"], label="val_loss")
ax.set_title("Segmentation Model Loss")
ax.set_ylabel("Loss")
ax.set_xlabel("Epoch")
ax.legend(loc="upper right")
fig.tight_layout()
fig.savefig(out_path, dpi=200, bbox_inches="tight", transparent=True)
plt.close(fig)
display(Image(filename=out_path, width=460))
Assessing the trained segmentation model performance¶
To assess performance, the predicted and ground-truth masks of 10 test cases are shown below. The model has not seen these images during training.
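The ResUNet outputs a per-pixel probability map, so before comparing against a ground-truth mask the prediction is binarized at some threshold. A minimal sketch of that step (the helper names and the 0.5 threshold are illustrative assumptions, not necessarily what the notebook uses):

```python
import numpy as np

def binarize_mask(pred, threshold=0.5):
    """Convert a per-pixel probability map into a binary tumor mask."""
    return (pred > threshold).astype(np.uint8)

def has_tumor(mask):
    """A mask 'detects' a tumor if any pixel is positive."""
    return bool(mask.any())

# Toy 4x4 probability map with a small high-confidence region
pred = np.zeros((4, 4))
pred[1:3, 1:3] = 0.9
mask = binarize_mask(pred)
print(int(mask.sum()))  # 4 positive pixels
```

The binary predicted mask can then be overlaid on the MRI slice next to the ground-truth mask, which is how the side-by-side comparison below is produced.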